What is limiter?
The 'limiter' npm package is used to control the rate of operations in Node.js applications. It allows you to limit the number of operations that can be performed over a given period of time, which is useful for rate-limiting API requests, managing resource usage, and preventing abuse.
What are limiter's main functionalities?
Basic Rate Limiting
This feature limits the number of operations to a specified number per interval. In this example, the limiter allows 5 requests per second. Because fireImmediately is not set, removeTokens() waits until a token is available before resolving, so excess requests are delayed rather than rejected; the promise resolves with the number of tokens remaining in the interval.
const { RateLimiter } = require('limiter');

const limiter = new RateLimiter({ tokensPerInterval: 5, interval: 'second' });

async function makeRequest() {
  // Waits (if necessary) until a token is available, then resolves
  // with the number of tokens remaining in the current interval.
  const remainingRequests = await limiter.removeTokens(1);
  console.log(`Request allowed (${remainingRequests} remaining)`);
}

makeRequest();
Custom Intervals
This feature lets you specify the interval as a number of milliseconds instead of a named unit. In this example, the limiter allows 10 requests per minute (an interval of 60000 ms). Because fireImmediately is not set, removeTokens() waits until a token is available before resolving.
const { RateLimiter } = require('limiter');

const limiter = new RateLimiter({ tokensPerInterval: 10, interval: 60000 }); // 10 requests per minute

async function makeRequest() {
  // Waits until a token is available, then resolves with the tokens remaining.
  const remainingRequests = await limiter.removeTokens(1);
  console.log(`Request allowed (${remainingRequests} remaining)`);
}

makeRequest();
Rate Limiting with Bursts
The token bucket allows bursts of up to the configured limit (here, 5 requests) to proceed immediately. With fireImmediately: true, removeTokens() resolves right away instead of waiting: once the bucket is empty, the promise resolves with remainingRequests set to -1, letting you reject the request outright rather than delay it.
const { RateLimiter } = require('limiter');

const limiter = new RateLimiter({ tokensPerInterval: 5, interval: 'second', fireImmediately: true });

async function makeRequest() {
  // Resolves immediately; a negative value means the bucket is empty.
  const remainingRequests = await limiter.removeTokens(1);
  if (remainingRequests >= 0) {
    console.log('Request allowed');
  } else {
    console.log('Rate limit exceeded');
  }
}

makeRequest();
Other packages similar to limiter
bottleneck
Bottleneck is a powerful rate limiter and scheduler for Node.js and the browser. It provides more advanced features like clustering, priority queues, and reservoir management. Compared to 'limiter', Bottleneck offers more flexibility and control over rate limiting and task scheduling.
rate-limiter-flexible
Rate-limiter-flexible is a highly configurable rate limiter for Node.js. It supports various storage options like Redis, MongoDB, and in-memory. It offers more advanced features such as penalty and reward mechanisms, and different rate limiting strategies. It is more versatile compared to 'limiter'.
express-rate-limit
Express-rate-limit is a middleware for Express.js applications to limit repeated requests to public APIs. It is specifically designed for use with Express and provides easy integration with minimal configuration. It is more specialized for Express.js compared to 'limiter'.
limiter
Provides a generic rate limiter for the web and node.js. Useful for API clients,
web crawling, or other tasks that need to be throttled. Two classes are exposed,
RateLimiter and TokenBucket. TokenBucket provides a lower level interface to
rate limiting with a configurable burst rate and drip rate. RateLimiter sits on
top of the token bucket and adds a restriction on the maximum number of tokens
that can be removed each interval to comply with common API restrictions such as
"150 requests per hour maximum".
Installation
npm install limiter
or, with Yarn:
yarn add limiter
Usage
A simple example allowing 150 requests per hour:
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 150, interval: "hour" });

async function sendRequest() {
  const remainingRequests = await limiter.removeTokens(1);
  callMyRequestSendingFunction(...);
}
Another example allowing one message to be sent every 250ms:
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });

async function sendMessage() {
  const remainingMessages = await limiter.removeTokens(1);
  callMyMessageSendingFunction(...);
}
The default behaviour is to wait for the duration of the rate limiting that's currently in effect before the promise is resolved, but if you pass in fireImmediately: true, the promise will be resolved immediately with remainingRequests set to -1:
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({
  tokensPerInterval: 150,
  interval: "hour",
  fireImmediately: true
});

async function requestHandler(request, response) {
  const remainingRequests = await limiter.removeTokens(1);
  if (remainingRequests < 0) {
    response.writeHead(429, {'Content-Type': 'text/plain;charset=UTF-8'});
    response.end('429 Too Many Requests - your IP is being rate limited');
  } else {
    callMyMessageSendingFunction(...);
  }
}
A synchronous method, tryRemoveTokens(), is available in both RateLimiter and
TokenBucket. This will return immediately with a boolean value indicating if the
token removal was successful.
import { RateLimiter } from "limiter";

const limiter = new RateLimiter({ tokensPerInterval: 10, interval: "second" });

if (limiter.tryRemoveTokens(5)) {
  console.log('Tokens removed');
} else {
  console.log('No tokens removed');
}
To get the number of remaining tokens outside the removeTokens promise, simply use the getTokensRemaining method.
import { RateLimiter } from "limiter";
const limiter = new RateLimiter({ tokensPerInterval: 1, interval: 250 });
console.log(limiter.getTokensRemaining());
Using the token bucket directly to throttle at the byte level:
import { TokenBucket } from "limiter";

const BURST_RATE = 1024 * 1024 * 150;
const FILL_RATE = 1024 * 1024 * 50;

const bucket = new TokenBucket({
  bucketSize: BURST_RATE,
  tokensPerInterval: FILL_RATE,
  interval: "second"
});

async function handleData(myData) {
  await bucket.removeTokens(myData.byteLength);
  sendMyData(myData);
}
Additional Notes
Both the token bucket and rate limiter should be used with a message queue or
some way of preventing multiple simultaneous calls to removeTokens().
Otherwise, earlier messages may get held up for long periods of time if more
recent messages are continually draining the token bucket. This can lead to
out of order messages or the appearance of "lost" messages under heavy load.
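One minimal way to prevent simultaneous calls is to chain every caller onto a shared promise, so tokens are requested strictly in submission order. This sketch uses a stub in place of a RateLimiter instance so it is self-contained; any object with an async removeTokens() method would fit:

```javascript
// Stub standing in for a RateLimiter instance; only the method shape matters here.
const stubLimiter = {
  async removeTokens(count) {
    return 0; // a real limiter would wait here until tokens are available
  }
};

function makeSerializer(limiter) {
  let tail = Promise.resolve(); // pending chain of token requests

  return function runSerialized(task) {
    const result = tail.then(async () => {
      await limiter.removeTokens(1); // only one caller waits on the bucket at a time
      return task();
    });
    tail = result.catch(() => {}); // keep the chain alive if a task rejects
    return result;
  };
}

// Tasks submitted first acquire tokens first, preserving FIFO order.
const run = makeSerializer(stubLimiter);
const order = [];
Promise.all([
  run(() => order.push('a')),
  run(() => order.push('b')),
  run(() => order.push('c')),
]).then(() => console.log(order.join(''))); // prints "abc"
```

A dedicated job queue library gives you the same guarantee with retries and persistence; this chain is just the smallest thing that keeps removeTokens() calls from racing.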
License
MIT License